On Bias Plus Variance

Author

  • David Wolpert
Abstract

This paper presents a Bayesian additive “correction” to the familiar quadratic-loss bias-plus-variance formula. It then discusses some other loss-function-specific aspects of supervised learning. It ends by presenting a version of the bias-plus-variance formula appropriate for log loss, and then the Bayesian additive correction to that formula. Both the quadratic-loss and log-loss correction terms are a covariance between the learning algorithm and the posterior distribution over targets. Accordingly, in the contexts in which those terms apply, there is not a “bias-variance trade-off” or a “bias-variance dilemma”, as one often hears. Rather, there is a bias-variance-covariance trade-off.
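For reference, the familiar quadratic-loss formula that the paper corrects is the textbook decomposition of expected squared error at a fixed input x, averaging over training sets D and target noise (standard notation assumed here, not the paper's): with y = f(x) + ε, where ε has mean 0 and variance σ²,

\[
\mathbb{E}_{D,\varepsilon}\!\big[(y - \hat{f}_D(x))^2\big]
= \underbrace{\sigma^2}_{\text{noise}}
+ \underbrace{\big(f(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}_D\!\big[(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)])^2\big]}_{\text{variance}}
\]

The paper's Bayesian correction adds a further covariance term, between the learning algorithm's guess and the posterior distribution over targets, on top of this decomposition; its exact form is given in the paper.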


Similar resources

Bias Plus Variance Decomposition for Zero-One Loss Functions

We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until t...
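For context, a zero-one decomposition of this kind (as in Kohavi and Wolpert, 1996) takes roughly the following per-input form; this is a sketch from the general shape of that result, with P_F the target distribution and P_H the distribution of the learner's prediction over training sets (notation assumed here, not the paper's), so consult the paper for the exact definitions:

\[
\mathbb{E}[\text{0-1 loss} \mid x]
= \underbrace{\tfrac{1}{2}\Big(1 - \sum_y P_F(y \mid x)^2\Big)}_{\sigma^2_x}
+ \underbrace{\tfrac{1}{2}\sum_y \big(P_F(y \mid x) - P_H(y \mid x)\big)^2}_{\text{bias}^2_x}
+ \underbrace{\tfrac{1}{2}\Big(1 - \sum_y P_H(y \mid x)^2\Big)}_{\text{variance}_x}
\]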


Bias Plus Variance Decomposition for Survival Analysis Problems

Bias-variance decomposition of the expected error, defined for regression and classification problems, is an important tool for studying and comparing different algorithms and for finding the best areas for their application. Here the decomposition is introduced for the survival analysis problem. In our experiments, we study the bias and variance parts of the expected error for two algorithms: the original Cox proportio...
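Independent of the survival-specific error definition used in that paper, the bias and variance parts of an expected error can be estimated by repeatedly redrawing training sets. Below is a minimal, hypothetical sketch for a plain regression setting; the target function, noise level, and polynomial learner are all illustrative assumptions, not from the paper:

import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # Illustrative ground-truth regression function (an assumption).
    return np.sin(x)

def fit_poly(x, y, degree):
    # Fit a polynomial of the given degree; return a prediction function.
    coefs = np.polyfit(x, y, degree)
    return lambda x_new: np.polyval(coefs, x_new)

# Monte Carlo estimate of the bias^2 and variance parts of the expected
# squared error at a single test point x0, over repeated training draws.
x0, n_train, n_trials, noise_sd, degree = 1.0, 30, 500, 0.3, 3
preds = np.empty(n_trials)
for t in range(n_trials):
    x = rng.uniform(0.0, 2.0 * np.pi, n_train)
    y = true_f(x) + rng.normal(0.0, noise_sd, n_train)
    preds[t] = fit_poly(x, y, degree)(x0)

bias_sq = (preds.mean() - true_f(x0)) ** 2  # squared-bias part
variance = preds.var()                      # variance part
print(f"bias^2 = {bias_sq:.4f}, variance = {variance:.4f}")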



Second-Order Experimental Designs for Simulation Metamodeling

The main purpose of this study is to compare the performance of a group of second-order designs, such as Box-Behnken, face-centered cube, three-level factorial, central composite, minimum bias, and minimum variance plus bias, for estimating a quadratic metamodel. A time-shared computer system is used to demonstrate the ability of the designs to provide a good fit of the metamodel to the simulation ...
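For reference, the quadratic (second-order) metamodel such designs are built to estimate is the standard response-surface model in k factors (a textbook form, not specific to this study):

\[
\hat{y}(\mathbf{x}) = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j
\]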


Empirical-bias bandwidths for local polynomial nonparametric regression and density estimation

A data-based local bandwidth selector is proposed for nonparametric regression by local fitting of polynomials. The estimator, called the empirical-bias bandwidth selector (EBBS), is rather simple and easily allows multivariate predictor variables and estimation of any order derivative of the regression function. EBBS minimizes an estimate of mean squared error consisting of a squared bias term pl...
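In outline, and consistent with the abstract above, EBBS chooses the bandwidth at each point by minimizing an estimated mean squared error; schematically (notation assumed here, with the bias term estimated empirically from how the fit changes with h, as detailed in the paper):

\[
\hat{h}(x) = \arg\min_{h} \Big\{ \widehat{\mathrm{bias}}^{2}(x; h) + \widehat{\mathrm{var}}(x; h) \Big\}
\]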



Journal:
  • Neural Computation

Volume 9, Issue –

Pages –

Publication date: 1997